Revisiting Consistency Regularization for Semi-Supervised Learning

Authors

Abstract

Consistency regularization is one of the most widely-used techniques for semi-supervised learning (SSL). Generally, the aim is to train a model that is invariant to various data augmentations. In this paper, we revisit this idea and find that enforcing invariance by decreasing distances between features from differently augmented images leads to improved performance. However, encouraging equivariance instead, by increasing the feature distance, further improves performance. To this end, we propose an improved consistency regularization framework with a simple yet effective technique, FeatDistLoss, that imposes consistency and equivariance on the classifier level and the feature level, respectively. Experimental results show that our model defines a new state of the art across a variety of standard benchmarks as well as imbalanced benchmarks. In particular, we outperform previous work by a significant margin in low-data regimes and at large imbalance ratios. Extensive experiments are conducted to analyze the method, and the code will be published.
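The idea in the abstract can be sketched as a loss with two terms: a FixMatch-style consistency term on classifier predictions across two augmented views, and a feature-level term that rewards (rather than penalizes) distance between the views' features. This is a minimal illustration, not the paper's implementation; the function name, the cosine-distance choice, and the hyperparameters `lambda_feat` and `threshold` are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def feat_dist_loss(feat_weak, feat_strong, logits_weak, logits_strong,
                   lambda_feat=1.0, threshold=0.95):
    """Sketch of a FeatDistLoss-style objective (hypothetical names/values).

    Classifier level: the confident pseudo-label from the weakly augmented
    view supervises the strongly augmented view (consistency).
    Feature level: the distance between the two views' features is
    *increased*, encouraging equivariance rather than invariance.
    """
    # Classifier-level consistency: confidence-thresholded pseudo-labels.
    probs = F.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()
    consistency = (F.cross_entropy(logits_strong, pseudo,
                                   reduction='none') * mask).mean()

    # Feature-level equivariance: cosine distance between the two views,
    # entering the loss with a negative sign so minimizing the loss
    # pushes the features apart.
    dist = 1.0 - F.cosine_similarity(feat_weak, feat_strong, dim=1)
    equivariance = -dist.mean()

    return consistency + lambda_feat * equivariance
```

In practice such a term would be combined with the usual supervised cross-entropy on the labeled batch, and the sign and weight of the feature term are exactly the knobs the paper's invariance-vs-equivariance comparison varies.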


Related Articles

Revisiting Semi-Supervised Learning with Graph Embeddings

We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embedding...


Semi-Supervised Learning Based on Semiparametric Regularization

Semi-supervised learning plays an important role in the recent literature on machine learning and data mining, and the developed semi-supervised learning techniques have led to many data mining applications in recent years. This paper addresses the semi-supervised learning problem by developing a semiparametric regularization based approach, which attempts to discover the marginal distribution of...


Semi-supervised Learning by Higher Order Regularization

In semi-supervised learning, at the limit of infinite unlabeled points while fixing labeled ones, the solutions of several graph Laplacian regularization based algorithms were shown by Nadler et al. (2009) to degenerate to constant functions with "spikes" at labeled points in R^d for d ≥ 2. These optimization problems all use the graph Laplacian regularizer as a common penalty term. In this paper...


Revisiting Embedding Features for Simple Semi-supervised Learning

Recent work has shown success in using continuous word embeddings learned from unlabeled data as features to improve supervised NLP systems, which is regarded as a simple semi-supervised learning mechanism. However, fundamental problems on effectively incorporating the word embedding features within the framework of linear models remain. In this study, we investigate and analyze three different...


Linear Manifold Regularization for Large Scale Semi-supervised Learning

The enormous wealth of unlabeled data in many applications of machine learning is beginning to pose challenges to the designers of semi-supervised learning methods. We are interested in developing linear classification algorithms to efficiently learn from massive partially labeled datasets. In this paper, we propose Linear Laplacian Support Vector Machines and Linear Laplacian Regularized Least...



Journal

Journal title: International Journal of Computer Vision

سال: 2022

ISSN: 0920-5691 (print), 1573-1405 (online)

DOI: https://doi.org/10.1007/s11263-022-01723-4